
    Localization of Diagnostically Relevant Regions of Interest in Whole Slide Images: a Comparative Study

    Whole slide digital imaging technology enables researchers to study pathologists’ interpretive behavior as they view digital slides and gain new understanding of the diagnostic medical decision-making process. In this study, we propose a simple yet important analysis to extract diagnostically relevant regions of interest (ROIs) from tracking records using only pathologists’ actions as they viewed biopsy specimens in the whole slide digital imaging format (zooming, panning, and fixating). We use these extracted regions in a visual bag-of-words model based on color and texture features to predict diagnostically relevant ROIs on whole slide images. Using a logistic regression classifier in a cross-validation setting on 240 digital breast biopsy slides and viewport tracking logs of three expert pathologists, we produce probability maps that show 74% overlap with the actual regions at which pathologists looked. We compare different bag-of-words models by changing dictionary size, visual word definition (patches vs. superpixels), and training data (automatically extracted ROIs vs. manually marked ROIs). This study is a first step in understanding the scanning behaviors of pathologists and the underlying reasons for diagnostic errors. © 2016, Society for Imaging Informatics in Medicine
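    The visual bag-of-words representation described in the abstract can be sketched in a few lines: each image patch is mapped to its nearest "visual word" in a learned dictionary, and an ROI is summarized as a normalized histogram of word counts. The dictionary values and 3-D feature space below are illustrative toy assumptions, not the paper's actual color-and-texture features; the resulting histogram would feed a classifier such as logistic regression.

    ```python
    import math

    # Hypothetical visual dictionary: each "word" is a centroid in a toy
    # 3-D feature space (e.g., a patch's mean color). The study's real
    # features combine color and texture; this is only a sketch.
    DICTIONARY = [
        (0.1, 0.1, 0.1),  # word 0: dark patches
        (0.5, 0.5, 0.5),  # word 1: mid-tone patches
        (0.9, 0.9, 0.9),  # word 2: bright patches
    ]

    def nearest_word(feature, dictionary=DICTIONARY):
        """Assign a patch feature vector to its closest visual word."""
        dists = [math.dist(feature, w) for w in dictionary]
        return dists.index(min(dists))

    def bag_of_words(patch_features, dictionary=DICTIONARY):
        """Normalized histogram of visual-word counts over an ROI's patches."""
        hist = [0.0] * len(dictionary)
        for f in patch_features:
            hist[nearest_word(f, dictionary)] += 1.0
        total = sum(hist) or 1.0
        return [h / total for h in hist]

    # Three toy patches: one dark, two mid-tone.
    roi = [(0.12, 0.08, 0.10), (0.52, 0.49, 0.50), (0.55, 0.50, 0.48)]
    print(bag_of_words(roi))  # one third word 0, two thirds word 1
    ```

    Changing the dictionary size or the unit of description (square patches vs. superpixels) corresponds to the model variations the study compares.
    
    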

    Validating item response processes in digital competence assessment through eye-tracking techniques

    This paper reports on an exploratory study aimed at validating item response processes in digital competence assessment through eye-tracking techniques. When measuring complex cognitive constructs, it is crucial to design the evaluation items so that they trigger the intended knowledge and skills. Furthermore, assessing the validity of a test requires considering not only the content of the evaluation tasks involved in the test, but also whether examinees respond to the tasks by engaging construct-relevant response processes. The eye-tracking observations helped fill an ‘explanatory gap’ by providing data on variation in item response processes that is not captured by other sources of process data, such as think-aloud protocols or computer-generated log files. We propose a set of metrics that can help test designers validate the different item formats used in the evaluation of digital competence. The gaze data provided detailed information on test item response strategies, enabling profiling of examinee engagement and of the response processes associated with successful performance. There were notable differences between participants who solved the tasks correctly and those who failed, both in the time spent solving them and in their gaze data. These insights into response processes contributed to the validation of the assessment criteria of each item.
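    The kind of gaze metrics the abstract alludes to (time on task, fixation counts, comparisons between correct and incorrect solvers) can be computed from a fixation log. The log format, field names, and values below are illustrative assumptions, not the study's data or its actual metric set.

    ```python
    from statistics import mean

    # Toy fixation log: (examinee_id, fixation_duration_ms, solved_correctly).
    # Entirely hypothetical data for illustration.
    fixations = [
        ("p1", 220, True), ("p1", 180, True), ("p1", 240, True),
        ("p2", 300, False), ("p2", 350, False), ("p2", 310, False),
        ("p2", 330, False),
    ]

    def gaze_metrics(log):
        """Aggregate per-examinee total dwell time and fixation count."""
        out = {}
        for pid, dur, correct in log:
            m = out.setdefault(pid, {"dwell_ms": 0, "fixations": 0,
                                     "correct": correct})
            m["dwell_ms"] += dur
            m["fixations"] += 1
        return out

    def group_mean_dwell(metrics, correct):
        """Mean total dwell time for the correct (or incorrect) group."""
        vals = [m["dwell_ms"] for m in metrics.values()
                if m["correct"] == correct]
        return mean(vals) if vals else 0.0

    m = gaze_metrics(fixations)
    print(group_mean_dwell(m, True), group_mean_dwell(m, False))  # 640 1290
    ```

    Contrasting such per-group aggregates is one simple way to surface the differences in solving time and gaze behavior between successful and unsuccessful examinees that the study reports.
    
    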